Why the Louvre heist doesn't surprise museum security experts

Popular Science

It's often more 'smash and grab' than 'Mission: Impossible.' French police officers stand next to a furniture elevator used by robbers to enter the Louvre Museum, on Quai Francois Mitterrand, in Paris on October 19, 2025. Robbers broke into the Louvre and fled with jewellery on the morning of October 19, 2025, a source close to the case said, adding that its value was still being evaluated. A police source said an unknown number of thieves arrived on a scooter armed with small chainsaws and used a goods lift to reach the room they were targeting. A heist at a world-famous museum likely evokes images of stealthy cat burglars skulking at night armed with state-of-the-art gadgets, possibly even soundtracked with a cool, jazzy instrumental.


Concealment of Intent: A Game-Theoretic Analysis

Wu, Xinbo, Umrawal, Abhishek, Varshney, Lav R.

arXiv.org Artificial Intelligence

As large language models (LLMs) grow more capable, concerns about their safe deployment have also grown. Although alignment mechanisms have been introduced to deter misuse, they remain vulnerable to carefully designed adversarial prompts. In this work, we present a scalable attack strategy: intent-hiding adversarial prompting, which conceals malicious intent through the composition of skills. We develop a game-theoretic framework to model the interaction between such attacks and defense systems that apply both prompt and response filtering. Our analysis identifies equilibrium points and reveals structural advantages for the attacker. To counter these threats, we propose and analyze a defense mechanism tailored to intent-hiding attacks. Empirically, we validate the attack's effectiveness on multiple real-world LLMs across a range of malicious behaviors, demonstrating clear advantages over existing adversarial prompting techniques.
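To make the game-theoretic framing concrete, here is a toy two-strategy game between an attacker (direct vs. intent-hiding prompts) and a defender (prompt filtering vs. prompt-plus-response filtering). The payoff values below are entirely hypothetical and are not taken from the paper; the sketch only shows how pure-strategy equilibria can be read off such a matrix, and how an equilibrium can still leave the attacker with a positive payoff.

```python
from itertools import product

# Hypothetical payoffs for a 2x2 attacker-vs-defender game.
# Attacker strategy: 0 = direct prompt, 1 = intent-hiding prompt.
# Defender strategy: 0 = prompt filtering only, 1 = prompt + response filtering.
# Entries are (attacker payoff, defender payoff); the numbers are illustrative.
PAYOFFS = {
    (0, 0): (-1, 1),   # direct attack caught by the prompt filter
    (0, 1): (-1, 1),   # direct attack caught either way
    (1, 0): (2, -2),   # hidden intent slips past the prompt filter
    (1, 1): (1, -1),   # response filter catches some, not all, hidden attacks
}

def pure_nash_equilibria(payoffs):
    """Return (attacker, defender) strategy pairs that are mutual best responses."""
    eqs = []
    for a, d in product(range(2), range(2)):
        att, dfn = payoffs[(a, d)]
        best_att = all(att >= payoffs[(a2, d)][0] for a2 in range(2))
        best_dfn = all(dfn >= payoffs[(a, d2)][1] for d2 in range(2))
        if best_att and best_dfn:
            eqs.append((a, d))
    return eqs
```

Under these illustrative payoffs, the unique pure equilibrium has the attacker hiding intent and the defender filtering both prompts and responses, yet the attacker still profits, mirroring the kind of structural advantage the abstract describes.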


DenMune: Density peak based clustering using mutual nearest neighbors

Abbas, Mohamed, El-Zoghobi, Adel, Shoukry, Amin

arXiv.org Artificial Intelligence

Many clustering algorithms fail when clusters are of arbitrary shapes, of varying densities, or when the data classes are unbalanced and close to each other, even in two dimensions. A novel clustering algorithm, DenMune, is presented to meet this challenge. It is based on identifying dense regions using mutual nearest neighborhoods of size K, where K is the only parameter required from the user, and on obeying the mutual nearest neighbor consistency principle. The algorithm is stable for a wide range of values of K. Moreover, it is able to automatically detect and remove noise from the clustering process, as well as to detect the target clusters. It produces robust results on various low- and high-dimensional datasets relative to several known state-of-the-art clustering algorithms.
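As a concrete illustration of the mutual-neighborhood idea, the sketch below computes mutual K-nearest-neighbor pairs by brute force; the function name and the O(n^2) distance computation are illustrative choices for clarity, not DenMune's actual implementation.

```python
import numpy as np

def mutual_knn(points, k):
    """Return the set of mutual K-nearest-neighbor pairs (i, j), i < j.

    Two points are mutual neighbors when each appears in the other's
    K-nearest-neighbor list; counts of such relationships are the kind of
    density signal a mutual-neighbor algorithm builds on.
    """
    pts = np.asarray(points, dtype=float)
    # Pairwise squared Euclidean distances, brute force.
    d = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)           # a point is not its own neighbor
    knn = np.argsort(d, axis=1)[:, :k]    # indices of each point's K nearest
    neigh = [set(row) for row in knn]
    return {(i, j) for i in range(len(pts)) for j in neigh[i]
            if i < j and i in neigh[j]}
```

For example, with two tight pairs `[[0, 0], [0, 1], [5, 5], [5, 6]]` and `k=1`, only the within-pair relationships are mutual, yielding `{(0, 1), (2, 3)}`.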


The human being, the weak point of Artificial Intelligence

#artificialintelligence

Magic formulas for some, mathematical formulas full of promise for others, algorithms are far from infallible. Yet today, most of them drive decisions that influence many companies and even human lives. Denis Molin, a consultant at Teradata, a technology company specializing in database analysis and Big Data software, puts into perspective the biases that humans introduce into AI. Cathy O'Neil was one of the first to warn about these dangers in her 2016 book Weapons of Math Destruction (published in French as Algorithmes: la bombe à retardement). Buried inside the algorithms, intentional or unintentional biases can lead to bad interpretations of data and ultimately to bad decisions. These algorithms matter even more than they appear to, since artificial intelligence is based on self-learning algorithms that evolve over time depending on the data they are fed.


Artificial Intelligence May Soon Predict How Electronics Fail - ELE Times

#artificialintelligence

In the latest study, researchers mapped out the physics of small building blocks made up of atoms, then used machine learning techniques to estimate how larger structures created from those same building blocks might behave. It's a bit like looking at a single Lego brick to try to predict the strength of a much larger castle. It's a pursuit that could be a boon for the electronics that underpin our daily lives, from smartphones and electric cars to emerging quantum computers. One day, engineers could use the team's methods to pinpoint weak points in the design of electronic components in advance. The project is part of a larger focus on how the world of very small things, such as the wiggling of atoms, can help people build new and more efficient computers--even ones that take their inspiration from human brains. Artem Pimachev, a research associate in aerospace engineering at CU Boulder, is a co-author of the new study.


Understanding Spatial Robustness of Deep Neural Networks

Zhong, Ziyuan, Tian, Yuchi, Ray, Baishakhi

arXiv.org Artificial Intelligence

Deep Neural Networks (DNNs) are being deployed in a wide range of settings today, from safety-critical applications like autonomous driving to commercial applications involving image classification. However, recent research has shown that DNNs can be brittle to even slight variations of the input data. Therefore, rigorous testing of DNNs has gained widespread attention. While DNN robustness under norm-bound perturbation has received significant attention over the past few years, our knowledge is still limited when it comes to natural variants of the input images. These natural variants, e.g. a rotated or a rainy version of the original input, are especially concerning as they can occur naturally in the field without any active adversary and may lead to undesirable consequences. Thus, it is important to identify the inputs whose small variations may lead to erroneous DNN behaviors. The very few studies that looked at DNN robustness under natural variants, however, focus on estimating the overall robustness of DNNs across all the test data rather than localizing such error-producing points. This work aims to bridge this gap. To this end, we study the local per-input robustness properties of DNNs and leverage those properties to build a white-box (DEEPROBUST-W) and a black-box (DEEPROBUST-B) tool to automatically identify the non-robust points. Our evaluation of these methods on nine DNN models spanning three widely used image classification datasets shows that they are effective in flagging points of poor robustness. In particular, DEEPROBUST-W and DEEPROBUST-B are able to achieve an F1 score of up to 91.4% and 99.1%, respectively. We further show that DEEPROBUST-W can be applied to a regression problem for a self-driving car application.
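The black-box, per-input idea can be illustrated with a toy check: flag an input as non-robust if any simple spatial variant flips the prediction. The sketch below uses only exact 90-degree rotations and a generic `classify` callback; it is a simplified stand-in for the kind of check DEEPROBUST-B performs, not the paper's actual tool or its variant set.

```python
import numpy as np

def is_spatially_robust(classify, img, rotations=(1, 2, 3)):
    """Return True if the predicted label survives all listed 90-degree
    rotations of the input; False flags the input as spatially non-robust.

    `classify` is any function mapping an image array to a label.
    """
    base = classify(img)
    return all(classify(np.rot90(img, t)) == base for t in rotations)
```

A classifier that depends on absolute pixel position (e.g. "is the top-left pixel bright?") fails this check on an image with a single bright corner, while a rotation-invariant one (e.g. thresholding the mean intensity) passes it.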


Why Artificial Intelligence Will Save Cybersecurity? - CIOL

#artificialintelligence

Cybersecurity is in a dire state. A few years ago, 70 million Target customers were affected by a large-scale cyber attack. Target's CIO was immediately let go in the wake of such an unthinkable disaster. Now 70 million seems like a tame number after devastating cybersecurity incidents like the Equifax breach, which affected 143 million Americans, and the Yahoo account debacle came to light. Each attack compromised millions upon millions, maybe even billions, of citizens.


How to Leverage AI Recruiting to Make Better Hires - TalentCulture

#artificialintelligence

HR and recruiters don't tend to take things at face value. For good reason: we're called on to rely on our educated judgments. We find the best talent with the most potential for doing great things for an employer in the near future, and we do it over and over again. But we've been due for a turbocharge for a long time. A career this intense, constantly combining administrative, personal, and strategic tasks, needs sophisticated ways to move beyond the archaic practices we no longer want to rely on. With AI, we have them.


Clustering with Confidence: Finding Clusters with Statistical Guarantees

Henelius, Andreas, Puolamäki, Kai, Boström, Henrik, Papapetrou, Panagiotis

arXiv.org Machine Learning

Clustering is a widely used unsupervised learning method for finding structure in data. However, the resulting clusters are typically presented without any guarantees on their robustness; slightly changing the data sample used or re-running a clustering algorithm involving some stochastic component may lead to completely different clusters. There is, hence, a need for techniques that can quantify the instability of the generated clusters. In this study, we propose a technique for quantifying the instability of a clustering solution and for finding robust clusters, termed core clusters, which correspond to clusters where the co-occurrence probability of each data item within a cluster is at least $1 - \alpha$. We demonstrate how solving the core clustering problem is linked to finding the largest maximal cliques in a graph. We show that the method can be used with both clustering and classification algorithms. The proposed method is tested on both simulated and real datasets. The results show that the obtained clusters indeed meet the guarantees on robustness.
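A minimal sketch of the co-occurrence idea: given label assignments from repeated clustering runs, keep item pairs whose co-occurrence frequency is at least 1 - alpha, then grow groups in the resulting graph. The greedy growth below is a simplification of the paper's largest-maximal-clique formulation, and the function name is illustrative.

```python
from itertools import combinations

def core_clusters(labelings, alpha):
    """Group items whose pairwise co-occurrence probability across
    repeated clustering runs is at least 1 - alpha.

    `labelings` is a list of label assignments, one per clustering run.
    Greedily grows cliques in the co-occurrence graph (a simplification
    of enumerating the largest maximal cliques).
    """
    n = len(labelings[0])
    runs = len(labelings)
    # Keep pairs co-clustered often enough to meet the guarantee.
    edge = {
        (i, j)
        for i, j in combinations(range(n), 2)
        if sum(lab[i] == lab[j] for lab in labelings) / runs >= 1 - alpha
    }
    clusters, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        cluster = {i}
        for j in range(n):
            if j not in seen and j != i and all(
                (min(j, m), max(j, m)) in edge for m in cluster
            ):
                cluster.add(j)
        seen |= cluster
        clusters.append(sorted(cluster))
    return clusters
```

For three runs `[[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]]`, items 2 and 3 always co-occur, items 0 and 1 co-occur in two of three runs; a loose threshold groups both pairs, while `alpha = 0` keeps only the pair that co-occurs in every run.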